A European Parliament committee voted to strengthen the flagship legislative proposal as it heads toward passage

How Europe is leading the world in building guardrails around artificial intelligence

Authorities around the world are racing to come up with AI rules, including in the European Union, where draft legislation reached a crucial moment on Thursday.

A European Parliament committee voted to strengthen a flagship legislative proposal as it moves towards approval, part of a years-long push by Brussels to create artificial intelligence guardrails. These efforts have become even more urgent as the rapid development of chatbots like ChatGPT highlights the benefits—and the new dangers—that emerging technology brings.

Here’s an overview of EU AI law:

HOW DO THE RULES WORK?

The AI Act, first proposed in 2021, covers any product or service that uses an AI system. The law classifies artificial intelligence systems according to four risk levels, from minimal to unacceptable. The most risky applications face more stringent requirements, including transparency and the use of accurate data. Think of it as an “AI risk management system,” said Johann Laux, an expert at the Oxford Internet Institute.

WHAT ARE THE RISKS?

One of the EU’s main goals is to guard against AI threats to health and safety and to protect fundamental rights and values.

This means that some uses of AI are strictly prohibited, such as “social scoring” systems that rate people based on their behavior. AI that exploits vulnerable people, such as children, or that uses subliminal manipulation that can cause harm, such as an interactive talking toy that encourages dangerous behavior, is also prohibited.

Lawmakers bolstered the proposal by voting to ban predictive policing tools, which crunch data to forecast where crimes will occur and who will commit them. They also approved an expanded ban on remote facial recognition, with a few law enforcement exceptions, such as preventing a specific terrorist threat. The technology scans passers-by and uses artificial intelligence to match their faces against a database.

The aim is to “avoid a controlled society based on artificial intelligence,” Italian lawmaker Brando Benifei, who heads the European Parliament’s AI work, told reporters on Wednesday. “We think these technologies could be used for bad instead of good, and we think the risks are too high.”

Artificial intelligence systems used in high-risk categories such as employment and education, which could affect the course of a person’s life, face strict requirements, including transparency for users and the introduction of risk assessment and risk mitigation measures.

The EU executive says that most artificial intelligence systems, such as video games or spam filters, fall into the low or no-risk category.

WHAT ABOUT CHATGPT?

The original 108-page proposal barely mentioned chatbots, only calling for them to be labeled so users know they’re interacting with a machine. Negotiators later added provisions to cover general-purpose AI like ChatGPT, subjecting it to some of the same requirements as high-risk systems.

One important addition is the requirement to thoroughly document any copyrighted material used for AI systems to create text, images, videos or music that resembles human work. This would let content creators know if their blog posts, digital books, scientific articles or pop songs have been used to train algorithms using systems like ChatGPT. They can then decide if their work has been copied and seek compensation.

WHY ARE EU RULES SO IMPORTANT?

The European Union is not a major player in the development of cutting-edge artificial intelligence. This role is taken by the United States and China. But Brussels often leads the way with regulations that tend to become de facto global standards.

“Europeans are quite rich globally and there are a lot of them,” so companies and organizations often decide that the sheer size of the bloc’s internal market and 450 million consumers make compliance easier than developing different products for different regions, Laux said.

But it’s not just about imposing restrictions. By setting common rules for artificial intelligence, Brussels is also trying to develop the market by instilling user confidence, Laux said.

“The thinking behind it is that if you can get people to trust AI and apps, they’ll use it more,” Laux said. “And as they use it more, they will unlock the economic and social potential of AI.”

WHAT IF YOU BREAK THE RULES?

Violations can result in fines of up to 30 million euros ($33 million) or 6 percent of a company’s annual global revenue, whichever is higher, which in the case of tech companies like Google and Microsoft could run into the billions.

WHAT NEXT?

It may take years before the rules take full effect. European Union lawmakers are scheduled to vote on the bill in a plenary session in mid-June. It will then move into three-way talks involving the bloc’s 27 member states, the parliament and the executive commission, where it could face further changes as they wrangle over the details. Final approval is expected by the end of the year, or early 2024 at the latest, followed by a grace period of around two years for companies and organizations to comply.
